Whether it was at our Page@Davos event, our following Page Conversation, or in passing discussion among members, generative AI tools have emerged as a critical consideration for the Communications function. Below are insights from a pair of Page Conversations on the topic; one with our East Coast/EMEA-based members, and another with our West Coast/APAC-based members.

Both audiences were polled on their use of ChatGPT. The results were broadly comparable, though with some notable differences:

  • 37% of the East Coast/EMEA audience had tested the tool with their teams, compared to 43% for the West Coast/APAC audience. 
  • Both audiences were primarily concerned about accuracy, efficiency, and mis/disinformation. However, concerns about bias were much more prevalent among the West Coast/APAC audience.

Members then dove into deep discussion about the benefits, setbacks, and risks of generative AI. Below are some of the considerations they shared.

Do these tools pose a threat to the communications function?

  • Not imminently. Generative AI excels at creating rough drafts and handling formulaic, repetitive tasks, but it is poor at forward-looking work and cannot handle concepts like trust and culture effectively. Communicators should identify how these tools can automate lower-level activities, freeing them to be more strategic with their efforts.
  • However, the activities that many propose to delegate to ChatGPT, like copywriting and first-draft production, are usually undertaken by junior-level communicators. If we continue down the path of implementing generative AI in our organizations, we must be intentional about training early-career communicators to effectively use these new tools, while ensuring they are still building foundational skills.

This may be the first step toward humans becoming a “referee” for communications.

  • Some members feel this technology represents a singularity – a point in time that marks the beginning of the end for human-generated content. Bots are no longer learning only from humans but from other bots as well, and their capabilities improve with each use.
  • In the not-so-distant future, communicators may well be in a “referee” role, regulating and moderating content that is majority-produced by AI.

The risk it poses for disinformation is one of the greatest challenges communicators need to consider. 

  • The mass availability of these tools can further weaponize disinformation at scale. Bad actors, whatever their motivation, can now produce, publish, and promote disinformation at rapid speed using AI-assisted tools.
  • Head off this risk by focusing on corporate character, which acts as a bulwark against disinformation campaigns.